
    Tensor completion in hierarchical tensor representations

    Compressed sensing extends from the recovery of sparse vectors from undersampled measurements via efficient algorithms to the recovery of matrices of low rank from incomplete information. Here we consider a further extension to the reconstruction of tensors of low multi-linear rank in recently introduced hierarchical tensor formats from a small number of measurements. Hierarchical tensors are a flexible generalization of the well-known Tucker representation and have the advantage that the number of degrees of freedom of a low-rank tensor does not scale exponentially with the order of the tensor. While the corresponding tensor decompositions can be computed efficiently via successive applications of (matrix) singular value decompositions, some important properties of the singular value decomposition do not extend from the matrix to the tensor case. This results in major computational and theoretical difficulties in designing and analyzing algorithms for low-rank tensor recovery. For instance, a canonical analogue of the tensor nuclear norm is NP-hard to compute in general, in stark contrast to the matrix case. In this book chapter we consider versions of iterative hard thresholding schemes adapted to hierarchical tensor formats. One variant builds on methods from Riemannian optimization and uses a retraction mapping from the tangent space of the manifold of low-rank tensors back to this manifold. We provide first partial convergence results based on a tensor version of the restricted isometry property (TRIP) of the measurement map. Moreover, an estimate of the number of measurements is provided that ensures the TRIP for a given tensor rank with high probability for Gaussian measurement maps. (Comment: revised version, to be published in Compressed Sensing and Its Applications, edited by H. Boche, R. Calderbank, G. Kutyniok, J. Vybiral.)
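
    To make the iterative hard thresholding idea concrete, here is a minimal numerical sketch for a third-order tensor. It is not the hierarchical-format algorithm of the chapter: as a stand-in for the hierarchical truncation it uses a plain HOSVD-based projection onto a prescribed multilinear (Tucker) rank, and the Gaussian measurement matrix, step size and iteration count are illustrative assumptions.

        import numpy as np

        def hosvd_truncate(X, ranks):
            # Project a 3rd-order tensor onto multilinear rank 'ranks' via HOSVD:
            # keep the leading left singular vectors of each mode unfolding.
            factors = []
            for mode, r in enumerate(ranks):
                unfolding = np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)
                U, _, _ = np.linalg.svd(unfolding, full_matrices=False)
                factors.append(U[:, :r])
            core = np.einsum('ijk,ia,jb,kc->abc', X, *factors)
            return np.einsum('abc,ia,jb,kc->ijk', core, *factors)

        def tensor_iht(A, b, shape, ranks, n_iter=200, step=1.0):
            # Iterative hard thresholding: gradient step on 0.5*||A(X) - b||^2,
            # followed by the rank-truncation ("hard thresholding") step H_r.
            # A is an (m, prod(shape)) matrix acting on vectorized tensors.
            X = np.zeros(shape)
            for _ in range(n_iter):
                residual = b - A @ X.ravel()
                X = hosvd_truncate(X + step * (A.T @ residual).reshape(shape), ranks)
            return X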

    Geometric methods on low-rank matrix and tensor manifolds

    In this chapter we present numerical methods for low-rank matrix and tensor problems that explicitly make use of the geometry of rank-constrained matrix and tensor spaces. We focus on two types of problems. The first are optimization problems, such as matrix and tensor completion, linear systems, and eigenvalue problems; these can be solved by numerical optimization on manifolds, by so-called Riemannian optimization methods. We explain the basic elements of differential geometry needed to apply such methods efficiently to rank-constrained matrix and tensor spaces. The second type of problem concerns ordinary differential equations defined on matrix and tensor spaces. We show how their solution can be approximated by the dynamical low-rank principle, and discuss several numerical integrators that rely in an essential way on geometric properties characteristic of sets of low-rank matrices and tensors.
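
    As a small illustration of the Riemannian optimization viewpoint, the sketch below runs gradient descent for matrix completion on the manifold of rank-k matrices: the Euclidean gradient is projected onto the tangent space at the current iterate, and a truncated SVD serves as retraction. The fixed step size and iteration count are illustrative choices, not the chapter's implementation.

        import numpy as np

        def retract(Y, k):
            # Retraction onto the rank-k manifold: truncated SVD.
            U, s, Vt = np.linalg.svd(Y, full_matrices=False)
            return U[:, :k] @ (s[:k, None] * Vt[:k])

        def riemannian_completion(M_obs, mask, k, n_iter=500, step=0.5):
            # Minimize 0.5*||mask*(X - M_obs)||_F^2 over rank-k matrices:
            # project the Euclidean gradient onto the tangent space at X,
            # take a step, and retract back to the manifold.
            X = retract(np.where(mask, M_obs, 0.0), k)
            for _ in range(n_iter):
                G = np.where(mask, X - M_obs, 0.0)
                U, _, Vt = np.linalg.svd(X, full_matrices=False)
                U, Vt = U[:, :k], Vt[:k]
                xi = U @ (U.T @ G) + (G @ Vt.T) @ Vt - U @ (U.T @ G @ Vt.T) @ Vt
                X = retract(X - step * xi, k)
            return X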

    On critical points of quadratic low-rank matrix optimization problems

    The absence of spurious local minima in certain nonconvex low-rank matrix recovery problems has been of recent interest in computer science, machine learning and compressed sensing, since it explains the convergence of some low-rank optimization methods to global optima. One such example is low-rank matrix sensing under restricted isometry properties (RIPs). It can be formulated as a minimization problem for a quadratic function on the Riemannian manifold of low-rank matrices, with a positive semidefinite Riemannian Hessian that acts almost like an identity on low-rank matrices. In this work, new estimates for the singular values of local minima of such problems are given, which lead to improved bounds on RIP constants that ensure the absence of non-optimal local minima and sufficiently negative curvature at all other critical points. A geometric viewpoint is taken, inspired by the fact that the Euclidean distance function to a rank-k matrix possesses no critical points on the corresponding embedded submanifold of rank-k matrices except for the single global minimum.
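
    The geometric fact quoted at the end can be checked numerically: for f(X) = 0.5*||X - A||_F^2 with A itself of rank k, the Riemannian gradient (the tangent-space projection of X - A) vanishes on the rank-k manifold only at A. A small sanity check, with illustrative sizes:

        import numpy as np

        rng = np.random.default_rng(0)
        m, n, k = 8, 6, 2

        def rand_rank_k():
            return rng.standard_normal((m, k)) @ rng.standard_normal((k, n))

        def riem_grad_norm(X, A):
            # Riemannian gradient of f(X) = 0.5*||X - A||_F^2 on the rank-k
            # manifold: the projection of X - A onto the tangent space at X.
            G = X - A
            U, _, Vt = np.linalg.svd(X, full_matrices=False)
            U, Vt = U[:, :k], Vt[:k]
            proj = U @ (U.T @ G) + (G @ Vt.T) @ Vt - U @ (U.T @ G @ Vt.T) @ Vt
            return np.linalg.norm(proj)

        A = rand_rank_k()
        print(riem_grad_norm(rand_rank_k(), A))  # generically nonzero
        print(riem_grad_norm(A, A))              # zero: A is the only critical point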

    Modified iterations for data-sparse solution of linear systems

    A modification of standard linear iterative methods for the solution of linear equations is investigated, aiming at improved data-sparsity with respect to a rank function. The convergence speed of the modified method is compared to the rank growth of its iterates for certain model cases. The general setup considered here is common in the data-sparse treatment of high-dimensional problems, such as sparse approximation and low-rank tensor calculus.
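
    A minimal sketch of the idea, under illustrative assumptions (a Lyapunov-type model operator, a fixed relaxation parameter, and truncation by a relative singular-value threshold): each Richardson iterate is re-compressed to low rank, and the iterate ranks can be monitored against the residual decay.

        import numpy as np

        def truncate(X, tol):
            # Re-compression: drop singular values below a relative threshold.
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            r = max(1, int(np.sum(s > tol * s[0])))
            return U[:, :r] @ (s[:r, None] * Vt[:r])

        def truncated_richardson(op, B, omega, tol, n_iter=200):
            # Richardson iteration X <- X + omega*(B - op(X)) for op(X) = B,
            # with every iterate truncated back to low rank.
            X = np.zeros_like(B)
            history = []
            for _ in range(n_iter):
                R = B - op(X)
                X = truncate(X + omega * R, tol)
                history.append((np.linalg.matrix_rank(X), np.linalg.norm(R)))
            return X, history

        # model problem: Lyapunov equation A X + X A^T = B with A = diag(1..n)
        n = 50
        a = np.arange(1.0, n + 1)
        B = np.ones((n, 1)) @ np.ones((1, n))          # rank-one right-hand side
        op = lambda X: a[:, None] * X + X * a[None, :]
        X, history = truncated_richardson(op, B, omega=1.0 / (n + 1), tol=1e-8)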

    A gradient sampling method on algebraic varieties and application to nonsmooth low-rank optimization


    Convergence Results for Projected Line-Search Methods on Varieties of Low-Rank Matrices Via Łojasiewicz Inequality

    The aim of this paper is to derive convergence results for projected line-search methods on the real-algebraic variety M≤k of real m × n matrices of rank at most k. Such methods extend successfully used Riemannian optimization methods on the smooth manifold Mk of rank-k matrices to its closure by taking steps along gradient-related directions in the tangent cone, and afterwards projecting back to M≤k. Considering such a method circumvents the difficulties which arise from the non-closedness and the unbounded curvature of Mk. Point-wise convergence is obtained for real-analytic functions on the basis of a Łojasiewicz inequality for the projection of the antigradient onto the tangent cone. If the limit point lies on the smooth part of M≤k, i.e., in Mk, this reduces to more or less known results, but with the benefit that asymptotic convergence rate estimates (for specific step sizes) can be obtained without an a priori curvature bound, simply from the fact that the limit lies on a smooth manifold. At the same time, this gives a convincing justification for assuming critical points to lie in Mk: if X is a critical point of f on M≤k, then either X has rank k, or ∇f(X) = 0. Key words: convergence analysis, line-search methods, low-rank matrices, Riemannian optimization, steepest descent, Łojasiewicz gradient inequality, tangent cone.
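
    For concreteness, a sketch of one such step is given below. The tangent-cone projection at a rank-r point combines the usual tangent-space projection with the best rank-(k - r) part of the remaining normal component; a fixed step size replaces the paper's gradient-related line search, so this is an illustrative simplification, not the analyzed method.

        import numpy as np

        def svd_trunc(Y, k):
            # Best rank-k approximation: the metric projection onto M_{<=k}.
            U, s, Vt = np.linalg.svd(Y, full_matrices=False)
            return U[:, :k] @ (s[:k, None] * Vt[:k])

        def tangent_cone_step(X, G, k, tol=1e-12):
            # Project the antigradient -G onto the tangent cone of M_{<=k} at X:
            # tangent-space part at the rank-r point plus the best rank-(k - r)
            # part of the normal component (I - UU^T)(-G)(I - VV^T).
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            r = int(np.sum(s > tol))
            U, Vt = U[:, :r], Vt[:r]
            D = -G
            T = U @ (U.T @ D) + (D @ Vt.T) @ Vt - U @ (U.T @ D @ Vt.T) @ Vt
            N = D - U @ (U.T @ D) - (D @ Vt.T) @ Vt + U @ (U.T @ D @ Vt.T) @ Vt
            return T + (svd_trunc(N, k - r) if k > r else 0.0)

        def projected_line_search(grad_f, X0, k, step=0.5, n_iter=300):
            # X <- P_{M<=k}(X + alpha * Theta), Theta the projected antigradient.
            X = X0
            for _ in range(n_iter):
                X = svd_trunc(X + step * tangent_cone_step(X, grad_f(X), k), k)
            return X

        # example objective: distance to a data matrix, gradient X -> X - A_data
        rng = np.random.default_rng(0)
        A_data = rng.standard_normal((20, 3)) @ rng.standard_normal((3, 15))
        X = projected_line_search(lambda X: X - A_data, np.zeros((20, 15)), k=3)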

    Computing eigenspaces with low rank constraints


    A note on overrelaxation in the Sinkhorn algorithm


    Data fusion techniques for the integration of multi-domain genomic data from uveal melanoma

    Uveal melanoma (UM) is a rare cancer that is well characterized at the molecular level. Two to four classes have been identified by analyses of gene expression (mRNA, ncRNA), DNA copy number, DNA methylation and somatic mutations, yet no factual integration of these data has been reported. We therefore applied novel algorithms for data fusion, joint Singular Value Decomposition (jSVD) and joint Constrained Matrix Factorization (jCMF), as well as similarity network fusion (SNF), to the integration of gene expression, methylation and copy number data from the Cancer Genome Atlas (TCGA) UM dataset. Variant features that most strongly impact the definition of classes were extracted for biological interpretation. Data fusion allows for the identification of the two to four classes previously described. Not all of these classes are evident at all levels, indicating that integrative analyses add to genomic discrimination power. The classes are also characterized by different frequencies of somatic mutations in putative driver genes (GNAQ, GNA11, SF3B1, BAP1). These innovative data fusion techniques confirm, as expected, the existence of two main types of uveal melanoma, mainly characterized by copy number alterations. Subtypes were also confirmed but are somewhat less well defined. Data fusion allows for real integration of multi-domain genomic data.
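
    As a rough illustration of the shared-subspace idea behind such fusion methods (and only that: the authors' jSVD, jCMF and SNF implementations involve more structure), one can stack the standardized per-layer feature-by-sample matrices over the common sample mode and read a joint embedding off a single SVD. Layer dimensions and sample counts below are made up.

        import numpy as np

        def joint_svd_embedding(layers, n_comp):
            # Stack per-layer (features x samples) matrices that share the
            # sample mode, after per-feature standardization, and take the
            # leading right singular vectors as a joint sample embedding.
            stacked = np.vstack([
                (X - X.mean(axis=1, keepdims=True))
                / (X.std(axis=1, keepdims=True) + 1e-12)
                for X in layers
            ])
            _, _, Vt = np.linalg.svd(stacked, full_matrices=False)
            return Vt[:n_comp].T   # (samples x n_comp)

        # hypothetical layers: expression, methylation, copy number, 80 tumours
        rng = np.random.default_rng(1)
        layers = [rng.standard_normal((p, 80)) for p in (2000, 5000, 500)]
        Z = joint_svd_embedding(layers, n_comp=4)  # cluster rows of Z for classes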

    Existence of dynamical low-rank approximations to parabolic problems
